Data center simulation and its validation using Google traces
Traces released by Google concerning the operation of one of its clusters were analyzed in order to understand its behavior. A data center model was implemented in OMNeT++ and validated by comparing its results with those obtainable from the Google traces.
A Big Data Analyzer for Large Trace Logs
The current generation of Internet-based services is typically hosted on large
data centers that take the form of warehouse-size structures housing tens of
thousands of servers. Continued availability of a modern data center is the
result of a complex orchestration among many internal and external actors
including computing hardware, multiple layers of intricate software, networking
and storage devices, electrical power and cooling plants. During the course of
their operation, many of these components produce large amounts of data in the
form of event and error logs that are essential not only for identifying and
resolving problems but also for improving data center efficiency and
management. Most of these activities would benefit significantly from data
analytics techniques to exploit hidden statistical patterns and correlations
that may be present in the data. The sheer volume of data to be analyzed makes
uncovering these correlations and patterns a challenging task. This paper
presents BiDAl, a prototype Java tool for log-data analysis that incorporates
several Big Data technologies in order to simplify the task of extracting
information from data traces produced by large clusters and server farms. BiDAl
provides the user with several analysis languages (SQL, R and Hadoop MapReduce)
and storage backends (HDFS and SQLite) that can be freely mixed and matched so
that a custom tool for a specific task can be easily constructed. BiDAl has a
modular architecture so that it can be extended with other backends and
analysis languages in the future. In this paper we present the design of BiDAl
and describe our experience using it to analyze publicly-available traces from
Google data clusters, with the goal of building a realistic model of a complex
data center.
Comment: 26 pages, 10 figures
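The mix-and-match design the abstract describes — storage backends and analysis languages behind small, interchangeable interfaces — can be sketched as follows. This is a minimal illustration of the plug-in idea only; the class and function names are hypothetical and are not BiDAl's actual API.

```python
# Hypothetical sketch of a mix-and-match backend/analyzer design
# (illustrative names, not BiDAl's real interfaces).

class Backend:
    """Interface a storage backend (e.g. HDFS- or SQLite-like) would implement."""
    def store(self, name, rows): ...
    def load(self, name): ...

class InMemoryBackend(Backend):
    """Toy stand-in for a real backend: named tables held in memory."""
    def __init__(self):
        self._tables = {}
    def store(self, name, rows):
        self._tables[name] = list(rows)
    def load(self, name):
        return self._tables[name]

def run_analysis(backend, table, analyzer):
    """Apply an analysis step (stand-in for an SQL/R/MapReduce command)
    to a table fetched from whichever backend is plugged in."""
    return analyzer(backend.load(table))

# Usage: compute the mean task duration from a toy trace.
backend = InMemoryBackend()
backend.store("tasks", [{"duration": 10}, {"duration": 30}])
mean = run_analysis(backend, "tasks",
                    lambda rows: sum(r["duration"] for r in rows) / len(rows))
# mean == 20.0
```

Because `run_analysis` only depends on the `Backend` interface, swapping the storage layer or the analysis step requires no changes to the pipeline itself, which is the modularity the paper claims for BiDAl.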
Distributed Lower Bounds for Ruling Sets
Given a graph $G = (V, E)$, an $(\alpha, \beta)$-ruling set is a subset $S \subseteq V$ such that the distance between any two vertices in $S$ is at least
$\alpha$, and the distance between any vertex in $V$ and the closest vertex in $S$
is at most $\beta$. We present lower bounds for distributedly computing
ruling sets.
More precisely, for the problem of computing a $(2, \beta)$-ruling set in the
LOCAL model, we show the following, where $n$ denotes the number of vertices,
$\Delta$ the maximum degree, and $c$ is some universal constant independent of
$\beta$ and $n$.
Any deterministic algorithm requires $\Omega(\min\{\frac{\log \Delta}{\beta \log \log \Delta}, \log_\Delta n\})$
rounds, for all $\beta \le c \cdot \min\{\sqrt{\frac{\log \Delta}{\log \log \Delta}}, \log_\Delta n\}$. By optimizing $\Delta$, this implies a
deterministic lower bound of $\Omega(\sqrt{\frac{\log n}{\beta \log \log n}})$ for all $\beta \le c \cdot (\frac{\log n}{\log \log n})^{1/3}$.
Any randomized algorithm requires $\Omega(\min\{\frac{\log \Delta}{\beta \log \log \Delta}, \log_\Delta \log n\})$ rounds, for all $\beta \le c \cdot \min\{\sqrt{\frac{\log \Delta}{\log \log \Delta}}, \log_\Delta \log n\}$. By optimizing
$\Delta$, this implies a randomized lower bound of $\Omega(\sqrt{\frac{\log \log n}{\beta \log \log \log n}})$
for all $\beta \le c \cdot (\frac{\log \log n}{\log \log \log n})^{1/3}$.
For $\beta > 1$, this improves on the previously best lower bound of
$\Omega(\log^* n)$ rounds that follows from the 30-year-old bounds of Linial
[FOCS'87] and Naor [J.Disc.Math.'91]. For $\beta = 1$, i.e., for the problem of
computing a maximal independent set, our results improve on the previously best
lower bound of $\Omega(\log^* n)$ on trees, as our bounds already hold on
trees.
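The two conditions in the definition of an $(\alpha, \beta)$-ruling set can be checked directly with breadth-first search. The sketch below is a straightforward verifier written from the definition given in the abstract; it is an illustration, not code from the paper.

```python
from collections import deque

def bfs_distances(adj, source):
    """Single-source shortest-path distances in an unweighted graph,
    given as a dict mapping each vertex to its list of neighbors."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_ruling_set(adj, s, alpha, beta):
    """Check whether s is an (alpha, beta)-ruling set of the graph adj:
    vertices in s are pairwise at distance >= alpha, and every vertex
    of the graph is within distance beta of some vertex in s."""
    s = set(s)
    closest = {v: float("inf") for v in adj}
    for u in s:
        dist = bfs_distances(adj, u)
        # Independence: no other vertex of s closer than alpha.
        for w in s:
            if w != u and dist.get(w, float("inf")) < alpha:
                return False
        # Record distance to the closest vertex of s seen so far.
        for v, d in dist.items():
            closest[v] = min(closest[v], d)
    # Domination: every vertex within distance beta of s.
    return all(closest[v] <= beta for v in adj)

# Usage on the path 0-1-2-3-4: {0, 2, 4} is a (2, 1)-ruling set
# (a maximal independent set), while {0, 4} only rules at beta = 2,
# since vertex 2 is at distance 2 from both endpoints.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

Setting $\beta = 1$ with $\alpha = 2$ recovers exactly the maximal-independent-set condition mentioned at the end of the abstract: pairwise non-adjacent, and every vertex adjacent to the set.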
- …